15 NP Problems and Advanced Algorithmic Techniques

Algorithmics 2025

Graph colouring — Sample Answers

Problem versions

  • Optimisation version: Find a valid colouring that uses the minimum number of colours.
  • Decision version: Given an integer k, can the graph be coloured with at most k colours?
    • This is NP-complete.

Brute force approaches

  • Exhaustive enumeration: Try all k^n assignments of k colours to the n vertices and check each for validity (for k = 2 each assignment is a bitmask).
Procedure ColouringBruteForce(G, k):
    n ← number of vertices in G

    For each assignment A ∈ {1..k}^n do
        valid ← true
        For each edge (u, v) in G do
            if A[u] = A[v] then valid ← false
        end for
        if valid then return A
    end for

    return "no k-colouring exists"
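As a runnable sketch, the exhaustive enumeration can be written in Python using itertools.product; the function name and the adjacency-dict graph representation are illustrative choices, not fixed by the notes:

```python
from itertools import product

def colouring_brute_force(adj, k):
    """Try all k^n colour assignments; return the first valid one, else None.

    adj: dict mapping each vertex to a list of its neighbours.
    """
    nodes = list(adj)
    for assignment in product(range(1, k + 1), repeat=len(nodes)):
        colour = dict(zip(nodes, assignment))
        # Valid iff no edge joins two vertices of the same colour.
        if all(colour[u] != colour[v] for u in adj for v in adj[u]):
            return colour
    return None
```

For example, a triangle has no 2-colouring but does have a 3-colouring, which this enumeration finds after at most k^3 checks.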
  • Recursive generation: Assign colours one vertex at a time. If a conflict arises, stop and backtrack.
Procedure ColourRecursive(G, k, colour, pos):
    // colour = partial assignment (0 = uncoloured), pos = index of next vertex

    if pos = n then
        return true                      // every vertex coloured, no conflicts

    v ← vertex pos of G
    for c ← 1 to k do
        if no neighbour of v already has colour c then
            colour[v] ← c
            if ColourRecursive(G, k, colour, pos+1) then
                return true
            colour[v] ← 0                // dead end further down: backtrack
        end if
    end for
    return false

Advanced techniques

  • Backtracking with pruning
    • Assign colours sequentially.
    • If a vertex conflicts with a neighbour, prune that branch.
    • Use heuristics like “most constrained vertex first” to reduce the search tree.
Algorithm (backtracking 3-colouring)
G         # graph
Nodes     # array/list of vertices in G, e.g. [v1, v2, ..., vn]
color     # dictionary: vertex -> {0,1,2,3} (0 means uncoloured)
COLORS = [1, 2, 3]

function ColorDFS(pos):
    # Base case: all vertices assigned a color
    if pos = length(Nodes):
        return True

    v ← Nodes[pos]

    # Try each color for v
    for c in COLORS:
        if IsValidColor(v, c):
            color[v] ← c                 # MAKE
            if ColorDFS(pos + 1):        # Recurse to next vertex
                return True
            color[v] ← 0                 # UNMAKE (backtrack)

    return False                         # No color worked for v

function IsValidColor(v, c):
    for w in Neighbours(v):
        if color[w] = c:                 # Adjacent same color? invalid
            return False
    return True
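The pseudocode above translates almost line for line into Python; a sketch for 3-colouring, assuming an adjacency-dict representation (names illustrative):

```python
def three_colour(adj):
    """Backtracking 3-colouring. Returns a vertex->colour dict, or None."""
    nodes = list(adj)
    colour = {v: 0 for v in nodes}          # 0 = uncoloured

    def valid(v, c):
        # c is usable for v iff no neighbour already carries colour c.
        return all(colour[w] != c for w in adj[v])

    def dfs(pos):
        if pos == len(nodes):
            return True                     # all vertices coloured
        v = nodes[pos]
        for c in (1, 2, 3):
            if valid(v, c):
                colour[v] = c               # MAKE
                if dfs(pos + 1):
                    return True
                colour[v] = 0               # UNMAKE (backtrack)
        return False                        # no colour worked for v

    return colour if dfs(0) else None
```

A 5-cycle is 3-colourable and the search succeeds quickly; the complete graph K4 is not, and the search exhausts every branch before returning None.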
  • Heuristic methods
    • Greedy algorithm: visit vertices in decreasing order of degree, assigning each the lowest colour not used by its already-coloured neighbours. Fast, but may use more colours than the chromatic number.
  • Hill climbing / simulated annealing on the solution space
    • Start with a complete colouring (possibly invalid).
    • Neighbour = recolour one vertex.
    • Hill climbing accepts improvements only; simulated annealing sometimes accepts worse moves to escape local optima.
    • Useful for large graphs, but no guarantee of optimality.
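The greedy heuristic can be sketched in Python; the adjacency-dict representation and function name are illustrative:

```python
def greedy_colouring(adj):
    """Greedy colouring: visit vertices in decreasing degree order,
    giving each the smallest colour unused by its neighbours."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    colour = {}
    for v in order:
        used = {colour[w] for w in adj[v] if w in colour}
        c = 1
        while c in used:                 # smallest colour not yet used nearby
            c += 1
        colour[v] = c
    return colour
```

The result is always a valid colouring, but the number of colours used depends on the visiting order and can exceed the optimum.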

Special cases

  • Trees: Always 2-colourable (graph is bipartite). A BFS or DFS will assign alternating colours in O(V+E).
  • Planar graphs (maps): Always 4-colourable (Four Colour Theorem). Finding the chromatic number for planar graphs is still hard, but we know 4 is always enough.
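For the tree case, a BFS that alternates colours by level is enough; a minimal Python sketch, assuming an adjacency-dict representation:

```python
from collections import deque

def two_colour_tree(adj, root):
    """BFS 2-colouring of a tree: alternate colours 1 and 2 by level. O(V+E)."""
    colour = {root: 1}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in colour:
                colour[w] = 3 - colour[v]   # flip between 1 and 2
                q.append(w)
    return colour
```

The same code doubles as a bipartiteness check on general graphs if you additionally verify that no edge joins two vertices of the same colour.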

0–1 Knapsack

Problem versions

  • Optimisation version: Maximise total value subject to capacity C.
  • Decision version: Given target value V*, is there a subset with total value ≥ V* and total weight ≤ C? (NP-complete)

Brute force approaches

  • Bitmask / enumeration: Try all 2^n subsets; each bit decides include/exclude.
Procedure KnapsackBitmask(W, V, C):
    n ← length(W)
    bestValue ← 0

    For mask ← 0 to 2^n − 1 do
        totalW ← Σ (W[i] * bit(mask, i))     // bit(mask,i) ∈ {0,1}
        totalV ← Σ (V[i] * bit(mask, i))     // multiply by mask bit

        if totalW ≤ C then
            bestValue ← max(bestValue, totalV)
        end if
    end for

    return bestValue
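The bitmask procedure above maps directly onto Python's integer bit operations; a runnable sketch (function name illustrative):

```python
def knapsack_bitmask(weights, values, capacity):
    """Try all 2^n subsets; bit i of mask decides include/exclude item i."""
    n = len(weights)
    best = 0
    for mask in range(1 << n):
        total_w = sum(weights[i] for i in range(n) if mask >> i & 1)
        if total_w <= capacity:
            total_v = sum(values[i] for i in range(n) if mask >> i & 1)
            best = max(best, total_v)
    return best
```

With weights [1, 3, 4, 5], values [1, 4, 5, 7] and capacity 7, the best subset is {weight 3, weight 4} for value 9.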
  • Recursive include/exclude: Branch on each item: take it or skip it.
Procedure KnapsackRecursive(W, V, C, i):
    if i = length(W) or C = 0 then
        return 0

    if W[i] > C then
        return KnapsackRecursive(W, V, C, i+1)

    skip ← KnapsackRecursive(W, V, C, i+1)
    take ← V[i] + KnapsackRecursive(W, V, C - W[i], i+1)
    return max(skip, take)

Advanced techniques

  • Dynamic programming (Top-down with memoisation)
Procedure KnapsackTopDown(W, V, C):
    memo ← dictionary  // keys: (i, c)

    Function F(i, c):
        if i = length(W) or c = 0 then return 0
        if (i, c) in memo then return memo[(i, c)]
        if W[i] > c then
            ans ← F(i+1, c)
        else
            ans ← max( F(i+1, c), V[i] + F(i+1, c - W[i]) )
        end if
        memo[(i, c)] ← ans
        return ans

    return F(0, C)
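In Python the memo dictionary can be replaced by functools.lru_cache; a sketch of the same top-down recurrence:

```python
from functools import lru_cache

def knapsack_topdown(weights, values, capacity):
    """Top-down DP: f(i, c) = best value from items i.. with capacity c."""
    n = len(weights)

    @lru_cache(maxsize=None)             # memoises on the (i, c) arguments
    def f(i, c):
        if i == n or c == 0:
            return 0
        if weights[i] > c:               # item cannot fit
            return f(i + 1, c)
        return max(f(i + 1, c),                          # skip item i
                   values[i] + f(i + 1, c - weights[i])) # take item i
    return f(0, capacity)
```

There are at most n × (C+1) distinct (i, c) states, so the memoised version runs in O(nC) time instead of O(2^n).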
  • Dynamic programming (Bottom-up table)
Procedure KnapsackBottomUp(W, V, C):
    n ← length(W)
    dp ← array (n+1) × (C+1) filled with 0
    for i ← 1 to n do
        for c ← 0 to C do
            dp[i][c] ← dp[i-1][c]
            if W[i-1] ≤ c then
                dp[i][c] ← max(dp[i][c], V[i-1] + dp[i-1][c - W[i-1]])
            end if
        end for
    end for
    return dp[n][C]
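A common space-saving variant of the table above keeps only one row; this Python sketch iterates capacities downwards so each item is still used at most once (an assumption of the 0–1 model):

```python
def knapsack_bottomup(weights, values, capacity):
    """Bottom-up DP with a 1-D rolling array: dp[c] = best value at capacity c."""
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Go downwards: dp[c - w] still refers to the previous item's row,
        # so item (w, v) cannot be taken twice.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Same O(nC) time as the 2-D table, but O(C) space.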
  • Backtracking with pruning (branch & bound)
Procedure KnapsackBacktrack(W, V, C):
    n ← length(W)
    best ← 0

    // optional: order items by value/weight to tighten bounds

    Procedure DFS(i, w, v, upperBound):
        if w > C then return
        if v + upperBound ≤ best then return   // bound prune
        if i = n then
            best ← max(best, v)
            return
        end if
        // compute new upper bound (e.g., fractional greedy from item i)
        ubNext ← BoundFrom(i, w, v)
        // try include
        DFS(i+1, w + W[i], v + V[i], ubNext)
        // try exclude
        DFS(i+1, w, v, ubNext)

    DFS(0, 0, 0, BoundFrom(0, 0, 0))
    return best
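A concrete Python sketch of this branch & bound, using the fractional-greedy bound mentioned in the comment (the helper names are illustrative):

```python
def knapsack_branch_bound(weights, values, capacity):
    """DFS over include/exclude decisions, pruning with a fractional
    (greedy) upper bound. Items are sorted by value/weight ratio."""
    items = sorted(zip(weights, values), key=lambda t: t[1] / t[0], reverse=True)
    n = len(items)
    best = 0

    def bound(i, cap):
        # Upper bound on extra value from items i..: greedy fill,
        # taking the last item fractionally (relaxation of 0-1).
        total = 0.0
        for w, v in items[i:]:
            if w <= cap:
                cap -= w
                total += v
            else:
                total += v * cap / w
                break
        return total

    def dfs(i, cap, value):
        nonlocal best
        best = max(best, value)
        if i == n or value + bound(i, cap) <= best:
            return                           # bound prune
        w, v = items[i]
        if w <= cap:
            dfs(i + 1, cap - w, value + v)   # try include
        dfs(i + 1, cap, value)               # try exclude

    dfs(0, capacity, 0)
    return best
```

Sorting by ratio makes the fractional bound tight, so large parts of the 2^n tree are never visited.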
  • Heuristic / metaheuristic methods

    • Greedy by value/weight ratio (exact for the fractional knapsack; for 0–1 it is only a heuristic and can be far from optimal without modification).
    • Hill climbing: start from a subset, swap in/out items to improve value under capacity.
    • Simulated annealing: like hill climbing but sometimes accepts worse moves to escape local optima.
    • Genetic algorithms: evolve populations of bitstrings (subsets) with crossover/mutation.
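A minimal hill-climbing sketch in Python, under the assumptions that the start state is the empty subset and a "neighbour" flips one item in or out (parameters and names are illustrative):

```python
import random

def knapsack_hill_climb(weights, values, capacity, iters=500, seed=0):
    """Hill climbing: repeatedly flip one random item in/out and keep
    the change only if it stays feasible and strictly improves value.
    Stops at a local optimum; no guarantee of the global best."""
    rng = random.Random(seed)
    n = len(weights)
    x = [0] * n                              # bit i = 1 means item i is taken

    def weight(s):
        return sum(w for w, b in zip(weights, s) if b)

    def value(s):
        return sum(v for v, b in zip(values, s) if b)

    for _ in range(iters):
        y = x[:]
        y[rng.randrange(n)] ^= 1             # flip one item
        if weight(y) <= capacity and value(y) > value(x):
            x = y                            # accept improving feasible move
    return value(x), x
```

Simulated annealing differs only in the acceptance rule: a worse move is accepted with a probability that decreases as the "temperature" falls.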

Special cases

  • Coin change: A related problem in which each item (coin) has weight equal to its denomination, the capacity is the target amount, and each coin may be used an unlimited number of times (unbounded knapsack); it is usually phrased as minimising the number of coins that make the amount exactly.
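The coin-change variant has its own compact DP; a Python sketch (function name illustrative):

```python
def min_coins(coins, amount):
    """Unbounded DP: dp[a] = fewest coins summing to exactly a.
    Returns -1 if the amount cannot be made."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1    # extend the best way to make a - c
    return dp[amount] if dp[amount] < INF else -1
```

Note the items loop is inside the amounts loop because coins may be reused, unlike the 0–1 knapsack.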

Perfect — here’s the Travelling Salesman Problem (TSP) section in the same style you used for Graph Colouring and Knapsack, with brute force (bitmask/permutation), backtracking, and heuristics.

Travelling Salesman Problem (TSP) — Sample Answers

Problem versions

  • Optimisation version: Find the Hamiltonian cycle (tour) of minimum total cost visiting every city exactly once.
  • Decision version: Given a cost K, is there a tour of total cost ≤ K?
    • This is NP-complete.

Brute force approaches

  • Bitmask / enumeration of tours: Fix a start city, then generate all permutations of the remaining n−1 cities.
    • There are (n−1)! such tours; for symmetric costs each tour and its reverse cost the same, so only (n−1)! / 2 are distinct.
Procedure TSP_BruteForce(D):
    n ← number of cities
    bestCost ← ∞
    For each permutation P of {2..n} do
        tour ← [1] + P + [1]
        cost ← Σ D[tour[i]][tour[i+1]]
        bestCost ← min(bestCost, cost)
    end for
    return bestCost
Procedure Permute(list, k, n):
    if k = n then
        output list
    else
        For i from k to n do
            swap(list[k], list[i])
            Permute(list, k+1, n)
            swap(list[k], list[i])   // backtrack
        EndFor
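The two procedures above can be combined into a short Python sketch using itertools.permutations, assuming a 0-indexed distance matrix D given as a list of lists:

```python
from itertools import permutations

def tsp_brute_force(D):
    """Fix city 0 as the start; try all (n-1)! orderings of the rest."""
    n = len(D)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                       # close the cycle
        cost = sum(D[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best
```

Even at n = 12 this is already about 40 million tours, which is why the exact methods below all try to prune.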

Advanced techniques

  • Backtracking (branch & bound)

    • Explore tours recursively by adding one city at a time.
    • Keep track of best tour found so far.
    • If partial path cost already exceeds best, prune branch.
    • Exact, but still exponential in worst case.
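A minimal Python sketch of this backtracking search, pruning only on partial cost (a real branch & bound would add a lower-bound function; the 0-indexed distance matrix is an assumption):

```python
def tsp_backtrack(D):
    """DFS over partial tours from city 0, pruning branches whose
    partial cost already matches or exceeds the best complete tour."""
    n = len(D)
    best = float("inf")
    visited = [False] * n
    visited[0] = True

    def dfs(city, count, cost):
        nonlocal best
        if cost >= best:
            return                                   # prune this branch
        if count == n:
            best = min(best, cost + D[city][0])      # close the tour
            return
        for nxt in range(1, n):
            if not visited[nxt]:
                visited[nxt] = True
                dfs(nxt, count + 1, cost + D[city][nxt])
                visited[nxt] = False                 # backtrack

    dfs(0, 1, 0)
    return best
```

It returns the same optimum as full enumeration, usually after exploring far fewer partial tours.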

Heuristic / metaheuristic methods

  • Greedy nearest neighbour

    • Always go to the closest unvisited city.
    • Very fast, but can give poor tours.
  • Minimum spanning tree (MST) approximation

    • Build MST, then do preorder traversal to create a tour.
    • Guarantees cost ≤ 2 × optimal (if triangle inequality holds).
  • Hill climbing (local search)

    • Start with a complete tour.
    • Improve by swapping cities or reversing edges (2-opt).
    • Stops at local minimum.
  • Simulated annealing

    • Like hill climbing, but sometimes accepts worse moves to escape local minima.
    • Good for large instances.
  • A* search with lower bounds

    • Explore partial tours guided by f = g + h.
    • g = cost so far, h = lower bound on remaining cost (e.g. MST of unvisited cities).
    • Exact, but memory-heavy.
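Of the heuristics above, nearest neighbour is the simplest to sketch in Python (0-indexed distance matrix assumed):

```python
def tsp_nearest_neighbour(D, start=0):
    """Greedy tour: always move to the closest unvisited city.
    Runs in O(n^2), but carries no optimality guarantee."""
    n = len(D)
    unvisited = set(range(n)) - {start}
    tour, city, cost = [start], start, 0
    while unvisited:
        nxt = min(unvisited, key=lambda c: D[city][c])   # closest unvisited
        cost += D[city][nxt]
        unvisited.remove(nxt)
        tour.append(nxt)
        city = nxt
    cost += D[city][start]                               # return to start
    tour.append(start)
    return tour, cost
```

On small instances it is easy to find distance matrices where the greedy tour is strictly worse than the optimum, which is exactly why it is usually followed by a local-search pass such as 2-opt.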